
    How relevant is social interaction in second language learning?

    Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication and, more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for autism, for example. However, studies on adult second language (L2) learning have mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question of whether social interaction should be considered a critical factor in adult language learning thus remains underspecified. Here, we review evidence supporting the view that sociality plays a significant role in communication and language learning, in an attempt to highlight factors that could facilitate this process in adult language learning. We suggest that sociality should be considered a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.

    Valence, arousal, and task effects in emotional prosody processing.

    Previous research suggests that emotional prosody processing is a highly rapid and complex process. In particular, it has been shown that different basic emotions can be differentiated in an early event-related brain potential (ERP) component, the P200. Often, the P200 is followed by later, long-lasting ERPs such as the late positive complex. The current experiment set out to explore to what extent emotionality and arousal can modulate these previously reported ERP components. In addition, we investigated the influence of task demands (implicit vs. explicit evaluation of stimuli). Participants listened to pseudo-sentences (sentences with no lexical content) spoken in six different emotions or in a neutral tone of voice while they rated either the arousal level of the speaker or their own arousal level. Results confirm that different emotional intonations are first differentiated in the P200 component, reflecting an initial emotional encoding of the stimulus, possibly including a valence tagging process. A marginally significant arousal effect was also found in this time window, with high-arousing stimuli eliciting a stronger P200 than low-arousing stimuli. The P200 component was followed by a long-lasting positive ERP between 400 and 750 ms. In this late time window, both emotion and arousal effects were found. No effects of task were observed in either time window. Taken together, the results suggest that emotion-relevant details are robustly decoded during both early and late processing stages, while arousal information is only reliably taken into consideration at a later stage of processing.
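    A minimal sketch of how a P200 and a later positivity of this kind could be quantified from epoched EEG data with MNE-Python is shown below. The file name, condition labels, channel selection, and time windows are illustrative assumptions, not the authors' actual pipeline.

        # Illustrative sketch only: quantifying a P200-like effect and a late
        # positivity from epoched EEG with MNE-Python. File name, event labels,
        # channels, and time windows are hypothetical assumptions.
        import mne

        epochs = mne.read_epochs("prosody_epochs-epo.fif")   # hypothetical file
        roi = ["Fz", "FCz", "Cz"]                             # assumed fronto-central ROI

        def mean_amplitude(evoked, tmin, tmax, picks):
            """Mean amplitude (in microvolts) over a time window and channel set."""
            data = evoked.copy().pick(picks).crop(tmin, tmax).data
            return data.mean() * 1e6

        for condition in ["high_arousal", "low_arousal"]:     # assumed event labels
            evoked = epochs[condition].average()
            p200 = mean_amplitude(evoked, 0.15, 0.25, roi)    # P200 window
            lpc = mean_amplitude(evoked, 0.40, 0.75, roi)     # 400-750 ms positivity
            print(f"{condition}: P200 = {p200:.2f} µV, late = {lpc:.2f} µV")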

    Temporal regularity effects on pre-attentive and attentive processing of deviance

    Temporal regularity allows the temporal locus of future information to be predicted, thereby potentially facilitating cognitive processing. We used event-related brain potentials (ERPs) to investigate how temporal regularity impacts pre-attentive and attentive processing of deviance in the auditory modality. Participants listened to sequences of sinusoidal tones differing exclusively in pitch. The inter-stimulus interval (ISI) in these sequences was manipulated to convey either an isochronous or a random temporal structure. In the pre-attentive session, deviance processing was unaffected by the regularity manipulation, as evidenced in three ERP components: the mismatch negativity (MMN), the P3a, and the reorienting negativity (RON). In the attentive session, the P3b was smaller for deviant tones embedded in an irregular temporal structure, while the N2b component remained unaffected. These findings confirm that temporal regularity can reinforce cognitive mechanisms associated with the attentive processing of deviance. Furthermore, they provide evidence for the dynamic allocation of attention in time and for dissociable pre-attentive and attention-dependent temporal processing mechanisms.
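    For illustration, the sketch below generates the kind of stimulus timing manipulated in this study: an auditory oddball sequence with either isochronous or randomly jittered inter-stimulus intervals. All parameter values (deviant probability, ISI range) are illustrative assumptions rather than the study's actual settings.

        # Minimal sketch of the ISI manipulation: an oddball sequence with either
        # isochronous or jittered inter-stimulus intervals (parameters assumed).
        import numpy as np

        rng = np.random.default_rng(0)

        def make_sequence(n_tones=400, p_deviant=0.15, isi=0.6, regular=True):
            """Return (onset_times_s, is_deviant) for one stimulus sequence."""
            is_deviant = rng.random(n_tones) < p_deviant
            if regular:
                isis = np.full(n_tones, isi)            # isochronous timing
            else:
                isis = rng.uniform(0.3, 0.9, n_tones)   # jittered, same mean ISI
            onsets = np.cumsum(isis)
            return onsets, is_deviant

        onsets_reg, dev_reg = make_sequence(regular=True)
        onsets_irr, dev_irr = make_sequence(regular=False)
        print(f"regular mean ISI: {np.diff(onsets_reg).mean():.3f} s, "
              f"irregular mean ISI: {np.diff(onsets_irr).mean():.3f} s")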

    Why pitch sensitivity matters: event-related potential evidence of metric and syntactic violation detection among Spanish late learners of German

    Event-related potential (ERP) data in monolingual German speakers have shown that sentential metric expectancy violations elicit a biphasic ERP pattern consisting of an anterior negativity and a posterior positivity (P600). This pattern is comparable to that elicited by syntactic violations. However, proficient French late learners of German do not detect violations of metric expectancy in German. They also show qualitatively and quantitatively different ERP responses to metric and syntactic violations. We followed up on the question of whether (1) the latter finding results from a potential insensitivity to pitch cues in speech segmentation among French speakers, or (2) whether it is rooted in rhythmic differences between the languages. We therefore tested Spanish late learners of German because Spanish, unlike French, uses pitch as a segmentation cue, even though the basic segmentation unit is the same in both languages (i.e., the syllable). We report ERP responses showing that Spanish L2 learners are sensitive to both syntactic and metric violations in German sentences, independent of attention to the task, as reflected in a P600 response. Overall, their behavioral performance resembles that of German native speakers. The current data suggest that Spanish L2 learners are able to extract metric units (the trochee) in their L2 (German), even though their basic segmentation unit in Spanish is the syllable. In addition, Spanish L2 learners of German, in contrast to French L2 learners, are sensitive to syntactic violations, indicating a tight link between syntactic and metric competence. This finding emphasizes the relevant role of metric cues not only in L2 prosodic processing but also in L2 syntactic processing.

    Auditory Affective Norms for German: Testing the Influence of Depression and Anxiety on Valence and Arousal Ratings

    BACKGROUND: The study of emotional speech perception and emotional prosody requires stimuli with reliable affective norms. However, ratings may be affected by the participants' current emotional state, as increased anxiety and depression have been shown to yield altered neural responses to emotional stimuli. Therefore, the present study had two aims: first, to provide a database of emotional speech stimuli, and second, to probe the influence of depression and anxiety on the affective ratings. METHODOLOGY/PRINCIPAL FINDINGS: We selected 120 words from the Leipzig Affective Norms for German database (LANG), which includes visual ratings of positive, negative, and neutral word stimuli. These words were spoken by a male and a female native speaker of German with the respective emotional prosody, creating a total set of 240 auditory emotional stimuli. The recordings were rated again for valence and arousal by an independent sample of subjects, yielding groups of highly arousing negative or positive stimuli and neutral stimuli low in arousal. These ratings were correlated with participants' emotional state as measured with the Depression Anxiety Stress Scales (DASS). Higher depression scores were related to more negative valence ratings of negative and positive, but not neutral, words. Anxiety scores correlated with increased arousal and more negative valence ratings of negative words. CONCLUSIONS/SIGNIFICANCE: These results underscore the importance of representatively distributed depression and anxiety scores among participants of affective rating studies. The LANG-audition database, which provides well-controlled, short-duration auditory word stimuli for the experimental investigation of emotional speech, is available in Supporting Information S1.
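    The correlational analysis described above can be illustrated with the short sketch below, which assumes a hypothetical CSV file with one row per participant containing mean valence/arousal ratings per word category and DASS subscale scores; the file and column names are assumptions, not those of the published database.

        # Illustrative sketch: correlating affective ratings with DASS subscale
        # scores. The file and column names are hypothetical assumptions.
        import pandas as pd
        from scipy.stats import pearsonr

        ratings = pd.read_csv("ratings_with_dass.csv")   # hypothetical file

        pairs = [
            ("dass_depression", "valence_negative"),
            ("dass_depression", "valence_positive"),
            ("dass_anxiety", "valence_negative"),
            ("dass_anxiety", "arousal_negative"),
        ]
        for dass_col, rating_col in pairs:
            r, p = pearsonr(ratings[dass_col], ratings[rating_col])
            print(f"{dass_col} vs {rating_col}: r = {r:.2f}, p = {p:.3f}")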

    Lower Beta: A Central Coordinator of Temporal Prediction in Multimodal Speech

    How the brain decomposes and integrates information in multimodal speech perception is linked to oscillatory dynamics. However, how speech processing takes advantage of redundancy between sensory modalities, and how this translates into specific oscillatory patterns, remains unclear. We address the role of lower beta activity (~20 Hz), generally associated with motor functions, as an amodal central coordinator that receives bottom-up delta-theta copies from specific sensory areas and generates top-down temporal predictions for auditory entrainment. Dissociating temporal prediction from entrainment may explain how and why visual input benefits speech processing rather than adding cognitive load in multimodal speech perception. On the one hand, body movements convey prosodic and syllabic features at delta and theta rates (i.e., 1–3 Hz and 4–7 Hz). On the other hand, the natural precedence of visual input over auditory onsets may prepare the brain to anticipate and facilitate the integration of auditory delta-theta copies of the prosodic-syllabic structure. Here, we identify three fundamental criteria, based on recent evidence and hypotheses, which support the notion that lower motor beta frequency may play a central and generic role in temporal prediction during speech perception. First, beta activity must respond to rhythmic stimulation across modalities. Second, beta power must respond to biological motion and speech-related movements conveying temporal information in multimodal speech processing. Third, temporal prediction may recruit a communication loop between motor and primary auditory cortices (PACs) via delta-to-beta cross-frequency coupling. We discuss evidence related to each criterion and extend these concepts to a beta-motivated framework of multimodal speech processing.
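    As a rough illustration of the third criterion, the sketch below computes one common index of delta-to-beta phase-amplitude coupling (the mean vector length) on simulated data; the frequency bands, sampling rate, and choice of coupling measure are illustrative assumptions rather than a specification from the article.

        # Rough sketch of delta-to-beta cross-frequency (phase-amplitude) coupling
        # using the mean vector length on simulated data; parameters are assumed.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500                                    # sampling rate (Hz), assumed
        t = np.arange(0, 60, 1 / fs)
        delta = np.sin(2 * np.pi * 2 * t)           # 2 Hz "prosodic" rhythm
        beta_amp = 1 + 0.5 * delta                  # beta power tied to delta phase
        sig = delta + beta_amp * np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)

        def bandpass(x, lo, hi, fs, order=4):
            b, a = butter(order, [lo, hi], btype="band", fs=fs)
            return filtfilt(b, a, x)

        delta_phase = np.angle(hilbert(bandpass(sig, 1, 3, fs)))    # delta phase
        beta_env = np.abs(hilbert(bandpass(sig, 15, 25, fs)))       # beta envelope

        # Mean vector length (Canolty-style modulation index)
        mvl = np.abs(np.mean(beta_env * np.exp(1j * delta_phase)))
        print(f"delta-beta phase-amplitude coupling (MVL): {mvl:.3f}")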

    Dynamic Facial Expressions Prime the Processing of Emotional Prosody

    Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, the visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., the N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not differ significantly. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was more strongly activated in incongruent than in congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and the ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in the case of emotional prime-target incongruency.

    Comment: The Next Frontier: Prosody Research Gets Interpersonal

    Neurocognitive models (e.g., Schirmer & Kotz, 2006) have helped to characterize how listeners incrementally derive meaning from vocal expressions of emotion in spoken language, what neural mechanisms are involved at different processing stages, and their relative time course. But how can these insights be applied to communicative situations in which prosody serves a predominantly interpersonal function? This comment examines recent data highlighting the dynamic interplay of prosody and language when vocal attributes serve the sociopragmatic goals of the speaker or reveal interpersonal information that listeners use to construct a mental representation of what is being communicated. Our comment serves as a beacon to researchers interested in how the neurocognitive system "makes sense" of socioemotive aspects of prosody.